With the increase in health consciousness, noninvasive body monitoring has attracted growing interest among researchers. Heart rate (HR) is one of the most important physiological signals, and in recent years researchers have estimated it remotely from facial videos. Although progress has been made, limitations remain, such as processing time that grows with accuracy and a lack of comprehensive, challenging datasets for evaluation and comparison. Recently, it was shown that HR information can be extracted from facial videos by spatial decomposition and temporal filtering. Inspired by this, this paper introduces a new framework that remotely estimates HR under realistic conditions by combining spatial and temporal filtering with a convolutional neural network. Our proposed approach outperforms the benchmark on the MMSE-HR dataset in both average and short-time HR estimation, and its short-time HR estimates are highly consistent with the ground truth.
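The temporal-filtering stage lends itself to a short illustration. Below is a minimal Python sketch of that idea, assuming a 30 fps video and a synthetic ROI trace standing in for a real face signal; the spatial decomposition and CNN stages of the paper are omitted.

```python
# Minimal sketch of the temporal-filtering stage: band-pass a facial ROI
# signal to the plausible HR band (0.7-4 Hz, i.e. 42-240 BPM) and read off
# the dominant frequency. The signal here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                      # video frame rate (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 s clip
# Synthetic mean green-channel trace of a face ROI: 1.2 Hz pulse + noise.
roi_signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.random.randn(t.size)

# Band-pass filter restricted to the HR band.
b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, roi_signal)

# Dominant frequency -> beats per minute.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
hr_bpm = 60.0 * freqs[np.argmax(spectrum)]
print(f"Estimated HR: {hr_bpm:.1f} BPM")   # ~72 BPM for the 1.2 Hz tone
```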
This paper studies offline policy learning, which aims at utilizing observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics are lower bounded in the offline dataset; put differently, the performance of the existing methods depends on the worst-case propensity in the offline dataset. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs) -- instead of point estimates -- of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which depends only on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein inequality to unbounded and non-i.i.d. data.
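To make the LCB idea concrete, here is a hedged Python sketch for a small finite policy class. The Bernstein-style penalty is an illustrative stand-in for the paper's data-dependent bound, not its exact construction.

```python
# Illustrative sketch of pessimistic offline policy learning: score each
# candidate policy by a lower confidence bound on its inverse-propensity-
# weighted (IPW) value estimate and pick the maximizer. The penalty term
# is a generic Bernstein-style stand-in, not the paper's exact bound.
import numpy as np

def lcb_policy_selection(contexts, actions, rewards, propensities,
                         policy_class, delta=0.05):
    n = len(rewards)
    best_policy, best_lcb = None, -np.inf
    for policy in policy_class:               # small finite class (assumed)
        chosen = np.array([policy(x) for x in contexts])
        match = (chosen == actions).astype(float)
        values = match / propensities * rewards   # IPW value terms
        v_hat = values.mean()                     # point estimate of the value
        # Variance-dependent penalty: only the actions this policy actually
        # takes (and their overlap) enter, not the worst-case propensity.
        penalty = (np.sqrt(2 * values.var() * np.log(1 / delta) / n)
                   + 3 * np.max(np.abs(values)) * np.log(1 / delta) / n)
        lcb = v_hat - penalty
        if lcb > best_lcb:
            best_policy, best_lcb = policy, lcb
    return best_policy, best_lcb
```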
Recently, a surge of high-quality 3D-aware GANs has been proposed, leveraging the generative power of neural rendering. It is natural to combine 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing view-consistent synthesis and editing, referred to as 3D GAN inversion. Even with the facial prior preserved in pre-trained 3D GANs, reconstructing a 3D portrait from a single monocular image remains an ill-posed problem. Straightforward application of 2D GAN inversion methods focuses only on texture similarity while ignoring the correctness of the 3D geometry, which can cause geometry collapse, especially when reconstructing a side face under an extreme pose. Moreover, synthesized results in novel views are prone to blurriness. In this work, we propose a novel method that improves 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints that make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and reasonable geometry during inversion. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We also design constraints to filter out conflicting areas from the optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
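A minimal sketch of how such a symmetry prior could enter the inversion objective is given below. The renderer interface `G(w, pose)`, the mirrored pose, and the `conflict_mask` for asymmetric regions are assumed names for illustration, not the paper's exact formulation.

```python
# Sketch of a symmetry-prior inversion loss (assumed interfaces):
# G(w, pose) is a differentiable 3D GAN renderer; `mirror_pose` is the
# camera pose of the horizontally flipped view; `conflict_mask` downweights
# asymmetric regions (e.g., hair, lighting) during optimization.
import torch
import torch.nn.functional as F

def inversion_loss(G, w, image, pose, mirror_pose, conflict_mask, lam=0.5):
    # Reconstruction term on the observed view.
    recon = F.l1_loss(G(w, pose), image)
    # Pseudo auxiliary view: the flipped image supervises the mirrored pose,
    # with conflicting (asymmetric) pixels masked out.
    flipped = torch.flip(image, dims=[-1])
    sym = (conflict_mask * (G(w, mirror_pose) - flipped).abs()).mean()
    return recon + lam * sym
```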
As a natural extension of the image synthesis task, video synthesis has attracted considerable interest recently. Many image synthesis works use class labels or text as guidance. However, neither labels nor text provide explicit temporal guidance, such as when an action starts or ends. To overcome this limitation, we introduce semantic video scene graphs as input for video synthesis, as they represent the spatial and temporal relationships between objects in the scene. Since video scene graphs are usually temporally discrete annotations, we propose a video scene graph (VSG) encoder that not only encodes the existing video scene graphs but also predicts graph representations for unlabeled frames. The VSG encoder is pre-trained with several contrastive multi-modal losses. Building on the pre-trained VSG encoder, a VQ-VAE, and an auto-regressive Transformer, we propose a semantic scene graph-to-video synthesis framework (SSGVS) that synthesizes a video given an initial scene image and an arbitrary number of semantic scene graphs. We evaluate SSGVS and other state-of-the-art video synthesis models on the Action Genome dataset and demonstrate the benefit of video scene graphs for video synthesis. The source code will be released.
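As an illustration of contrastive graph-frame pre-training, the sketch below shows one plausible InfoNCE-style objective aligning VSG-encoder graph embeddings with visual frame embeddings; the paper's exact losses and pairing may differ.

```python
# Sketch of a contrastive pre-training objective (assumed form): a symmetric
# InfoNCE loss that aligns each frame's graph embedding from the VSG encoder
# with the matching frame embedding within a batch.
import torch
import torch.nn.functional as F

def graph_frame_infonce(graph_emb, frame_emb, temperature=0.07):
    # graph_emb, frame_emb: (B, D) embeddings of paired graph/frame samples.
    graph_emb = F.normalize(graph_emb, dim=-1)
    frame_emb = F.normalize(frame_emb, dim=-1)
    logits = graph_emb @ frame_emb.t() / temperature   # (B, B) similarities
    targets = torch.arange(graph_emb.size(0), device=graph_emb.device)
    # Symmetric cross-entropy: matched pairs on the diagonal are positives.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```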
Deep convolutional neural networks have achieved great progress in image denoising. However, their complicated architectures and heavy computational cost hinder their deployment on mobile devices. Some recent efforts in designing lightweight denoising networks focus on reducing either FLOPs (floating-point operations) or the number of parameters. However, these metrics are not directly correlated with on-device latency. Through extensive analysis and experiments, we identify network architectures that fully utilize powerful neural processing units (NPUs) and thus enjoy both low latency and excellent denoising performance. Based on these findings, we propose a mobile-friendly denoising network, MFDNet. Experiments show that MFDNet achieves state-of-the-art performance on the real-world denoising benchmarks SIDD and DND under real-time latency on mobile devices. The code and pre-trained models will be released.
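The observation that FLOPs and latency diverge is easy to demonstrate. The sketch below times two blocks with very different FLOP counts; real NPU profiling as in the paper requires on-device tools, so this host-side timing is only a methodological illustration.

```python
# Minimal sketch of the point that FLOPs do not determine latency: a
# depthwise-separable block has far fewer FLOPs than a dense 3x3 conv,
# yet on many accelerators it is not proportionally faster.
import time
import torch
import torch.nn as nn

def measure_latency(block, x, runs=50):
    with torch.no_grad():
        for _ in range(10):                    # warm-up iterations
            block(x)
        start = time.perf_counter()
        for _ in range(runs):
            block(x)
        return (time.perf_counter() - start) / runs * 1e3   # ms per call

x = torch.randn(1, 32, 256, 256)
plain = nn.Conv2d(32, 32, 3, padding=1)        # dense 3x3 conv, high FLOPs
separable = nn.Sequential(                     # depthwise + pointwise, low FLOPs
    nn.Conv2d(32, 32, 3, padding=1, groups=32),
    nn.Conv2d(32, 32, 1),
)
print(f"plain:     {measure_latency(plain, x):.2f} ms")
print(f"separable: {measure_latency(separable, x):.2f} ms")
```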
Activation functions are element-wise mathematical functions that play a crucial role in deep neural networks (DNNs). Many novel and sophisticated activation functions have been proposed to improve DNN accuracy, but they can also consume substantial memory during back-propagation training. In this study, we propose nested forward automatic differentiation (forward AD), tailored to element-wise activation functions, for memory-efficient DNN training. We deploy nested forward AD in two widely used deep learning frameworks, TensorFlow and PyTorch, which support static and dynamic computation graphs, respectively. Our evaluation shows that nested forward AD reduces the memory footprint by up to 1.97x, outperforming the baseline model by 20% under the same memory reduction ratio.
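The core idea can be sketched in PyTorch with a custom autograd function: compute the activation's local derivative eagerly during the forward pass and keep only that one tensor for backward. SiLU is used here as an example composite element-wise activation; the paper's framework integration is more general.

```python
# Sketch of forward AD for an element-wise activation. Naive autograd through
# x * sigmoid(x) keeps both x and sigmoid(x) alive for backward; here the
# single derivative tensor is computed up front and is all that is stored.
import torch

class ForwardADSiLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        s = torch.sigmoid(x)
        out = x * s
        # Local derivative d(out)/dx = s + out * (1 - s), computed eagerly.
        ctx.save_for_backward(s + out * (1 - s))
        return out

    @staticmethod
    def backward(ctx, grad_out):
        (deriv,) = ctx.saved_tensors
        return grad_out * deriv        # chain rule with the stored derivative

x = torch.randn(4, requires_grad=True)
ForwardADSiLU.apply(x).sum().backward()
print(x.grad)
```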
In this paper, we propose a sequence-to-set method that can transform any sequence generative model trained by maximum likelihood into a set generative model, with which we can evaluate the utility/probability of any set. An efficient importance sampling algorithm is designed to address the computational challenge of learning sequence-to-set models. We present GRU2Set, an instance of our sequence-to-set method that adopts the well-known GRU model as the sequence generative model. To further obtain permutation-invariant representations of sets, we design the SetNN model, which is also an instance of the sequence-to-set model. A direct application of our models is to learn order/set distributions from collections of e-commerce orders, an essential step in many important operational decisions such as inventory placement for fast delivery. Based on the intuition that small sets are usually easier to learn than large sets, we propose a size-bias trick that helps learn better set distributions with respect to the $\ell_1$-distance evaluation metric. Two e-commerce order datasets, TMALL and HKTVMALL, are used in extensive experiments to show the effectiveness of our models. Experimental results show that our models can learn better set/order distributions from order data than the baselines. Moreover, regardless of which model we use, applying the size-bias trick consistently improves the quality of the set distribution learned from the data.
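The underlying identity is that a set's probability under a sequence model is the sum of the sequence probabilities over all orderings of its elements. The sketch below estimates this sum by importance sampling with a uniform proposal over permutations; `seq_log_prob` is a hypothetical callable standing in for a trained sequence model, and the proposal choice is illustrative only.

```python
# Sketch: P(set) = sum over permutations of P(sequence). Small sets are
# summed exactly; large sets are estimated by importance sampling with a
# uniform proposal q(perm) = 1/n!, so P(set) ~= n! * mean(p(sampled perm)).
import math
import random
from itertools import permutations

def set_log_prob(seq_log_prob, items, num_samples=100):
    n = len(items)
    if n <= 6:                       # small sets: exact sum over orderings
        return math.log(sum(math.exp(seq_log_prob(list(p)))
                            for p in permutations(items)))
    total = 0.0
    for _ in range(num_samples):
        perm = random.sample(items, n)          # a uniformly random ordering
        total += math.exp(seq_log_prob(perm))
    return math.lgamma(n + 1) + math.log(total / num_samples)  # lgamma(n+1)=log n!
```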
Trajectory prediction has been a long-standing problem in intelligent systems such as autonomous driving and robot navigation. Recent state-of-the-art models on large-scale benchmarks have been pushing the performance limits rapidly, mainly focusing on improving prediction accuracy. However, these models place less emphasis on efficiency, which is critical for real-time applications. This paper proposes an attention-based graph model named GATraj with a much faster prediction speed. The spatio-temporal dynamics of agents, e.g., pedestrians or vehicles, are modeled by attention mechanisms. Interactions between agents are modeled by a graph convolutional network. We also implement a Laplacian mixture decoder to mitigate mode collapse and generate diverse multimodal predictions for each agent. Our model achieves performance on par with state-of-the-art models at a much faster prediction speed, tested on multiple open datasets.
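A Laplacian mixture decoder is typically trained with a mixture negative log-likelihood so that every mode receives gradient, which discourages mode collapse. Below is a minimal sketch of such a loss, assuming the decoder outputs K trajectory modes with per-step Laplace scales and mixture logits; the exact parameterization in GATraj may differ.

```python
# Sketch of a Laplace mixture NLL over predicted trajectories.
import torch
import torch.nn.functional as F

def laplace_mixture_nll(mu, b, logits, target):
    # mu, b: (B, K, T, 2) mode means and Laplace scales; logits: (B, K)
    # mixture weights; target: (B, T, 2) ground-truth trajectory.
    target = target.unsqueeze(1)                        # (B, 1, T, 2)
    # Per-mode Laplace log-likelihood: -|x - mu| / b - log(2b), summed
    # over time steps and coordinates -> (B, K).
    log_comp = (-torch.abs(target - mu) / b
                - torch.log(2 * b)).sum(dim=(-1, -2))
    log_pi = F.log_softmax(logits, dim=-1)              # (B, K) mode weights
    # Marginalize over modes; all modes contribute to the gradient.
    return -torch.logsumexp(log_pi + log_comp, dim=-1).mean()
```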
Automatic speaker verification (ASV) has been widely used for identity authentication in real life. However, with the rapid development of voice conversion and speech synthesis algorithms and the improving quality of recording devices, ASV systems are vulnerable to spoofing attacks. In recent years, much work has addressed synthetic and replayed speech detection, and researchers have proposed many anti-spoofing methods based on hand-crafted features to improve the accuracy and robustness of detection systems. However, using hand-crafted features instead of raw waveforms discards information useful for anti-spoofing, which degrades detection performance. Inspired by the promising performance of ConvNeXt in image classification tasks, we extend the ConvNeXt network architecture to the spoofing attack detection task and propose an end-to-end anti-spoofing model. By combining the extended architecture with a channel attention block, the proposed model can focus on the most informative sub-bands of the speech representation to improve anti-spoofing performance. Experiments show that our proposed best single system achieves equal error rates of 1.88% and 2.79% on the ASVspoof 2019 LA and PA evaluation datasets, respectively, demonstrating the model's anti-spoofing capability.
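Channel attention blocks of this kind are commonly squeeze-and-excitation style. The sketch below shows one such block as an assumed stand-in for the paper's module: channel weights learned from globally pooled statistics let the network emphasize the most informative sub-band feature channels.

```python
# Sketch of a squeeze-and-excitation style channel attention block.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, F, T) feature map
        w = x.mean(dim=(-1, -2))               # squeeze: global average pool
        w = self.fc(w)                         # excitation: per-channel weights
        return x * w[:, :, None, None]         # reweight the channels

x = torch.randn(2, 64, 40, 100)
print(ChannelAttention(64)(x).shape)           # torch.Size([2, 64, 40, 100])
```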
One-shot network pruning at initialization (OPaI) is an effective way to reduce the cost of network pruning. Recently, there has been a growing belief that data is unnecessary in OPaI. However, we reach the opposite conclusion through ablation experiments on two representative OPaI methods, SNIP and GraSP. Specifically, we find that informative data is crucial to enhancing pruning performance. In this paper, we propose two novel methods, Discriminative One-shot Network Pruning (DOP) and Super Stitching, which prune networks using high-level, visually discriminative image patches. Our contributions are as follows. (1) Extensive experiments reveal that OPaI is data-dependent. (2) Super Stitching performs significantly better than the original OPaI methods on the ImageNet benchmark, especially for highly compressed models.
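For reference, SNIP scores each weight by the magnitude of its loss sensitivity on a data batch, which makes the data dependence explicit: a different batch yields different scores and a different mask. Below is a minimal sketch of that scoring, not of the paper's DOP or Super Stitching, whose discriminative patch selection is the novel part.

```python
# Sketch of SNIP-style saliency scoring: each weight w is scored by
# |w * dL/dw| on a data batch, and the lowest-scoring weights are pruned
# before training ever begins.
import torch

def snip_scores(model, loss_fn, batch_x, batch_y):
    loss = loss_fn(model(batch_x), batch_y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return [(p * g).abs() for p, g in zip(model.parameters(), grads)]

def global_prune_mask(scores, sparsity=0.9):
    flat = torch.cat([s.flatten() for s in scores])
    k = int(sparsity * flat.numel())
    threshold = flat.kthvalue(k).values          # k-th smallest score overall
    return [(s > threshold).float() for s in scores]
```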